8 research outputs found

    Online Maximum Matching with Recourse

    We study the online maximum matching problem in a model in which the edges are associated with a known recourse parameter k. An online algorithm for this problem has to maintain a valid matching while edges of the underlying graph are presented one after the other. At any moment the algorithm can decide to include an edge in the matching or to exclude it, under the restriction that at most k such actions per edge take place, where k is typically a small constant. This problem was introduced and studied in the context of general online packing problems with recourse by Avitabile et al. [Avitabile et al., 2013], whereas the special case k=2 was studied by Boyar et al. [Boyar et al., 2017]. In the first part of this paper, we consider the edge arrival model, in which an arriving edge never disappears from the graph. Here, we first give an improved analysis of the performance of the algorithm AMP of [Avitabile et al., 2013], by exploiting the structure of the matching problem. In addition, we extend the result of [Boyar et al., 2017] and show that the greedy algorithm has competitive ratio 3/2 for every even k and ratio 2 for every odd k. Moreover, we present and analyze an improvement of the greedy algorithm, which we call L-Greedy, and we show that for small values of k it outperforms the algorithm of [Avitabile et al., 2013]. In terms of lower bounds, we show that no deterministic algorithm can be better than (1+1/(k-1))-competitive, improving upon the lower bound of 1+1/k shown in [Avitabile et al., 2013]. The second part of the paper is devoted to the edge arrival/departure model, which is the fully dynamic variant of online matching with recourse. The analyses of L-Greedy and AMP carry through in this model; moreover, we show a lower bound of (k^2-3k+6)/(k^2-4k+7) for all even k >= 4. For k in {2,3}, the competitive ratio is 3/2.
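
    A minimal illustrative sketch of the kind of bookkeeping such an algorithm performs is given below. It is not the paper's AMP or L-Greedy algorithm, and the class and method names are hypothetical: edges arrive one at a time, each edge may be added to or removed from the matching at most k times, and the algorithm greedily adds edges and augments along paths of length three whenever the budgets allow it.

```python
from collections import defaultdict

class RecourseMatching:
    """Sketch of online matching under edge arrivals with a recourse
    budget of k add/remove operations per edge (illustrative only)."""

    def __init__(self, k):
        self.k = k
        self.adj = defaultdict(set)    # edges that have arrived so far
        self.match = {}                # vertex -> current partner
        self.ops = defaultdict(int)    # edge -> recourse operations used

    def _edge(self, a, b):
        return frozenset((a, b))

    def _can_touch(self, a, b):
        return self.ops[self._edge(a, b)] < self.k

    def _add(self, a, b):
        self.ops[self._edge(a, b)] += 1
        self.match[a], self.match[b] = b, a

    def _remove(self, a, b):
        self.ops[self._edge(a, b)] += 1
        del self.match[a]
        del self.match[b]

    def arrive(self, u, v):
        self.adj[u].add(v)
        self.adj[v].add(u)
        free = lambda x: x not in self.match

        # Both endpoints free: plain greedy addition.
        if free(u) and free(v) and self._can_touch(u, v):
            self._add(u, v)
            return

        # Exactly one endpoint free: look for a length-3 augmenting path
        # b - a - w - x (b, x free), using only edges seen so far.
        for a, b in ((u, v), (v, u)):
            if not free(a) and free(b):
                w = self.match[a]
                for x in self.adj[w]:
                    if x not in (a, b) and free(x) \
                            and self._can_touch(a, w) \
                            and self._can_touch(a, b) \
                            and self._can_touch(w, x):
                        self._remove(a, w)   # undo one earlier decision
                        self._add(a, b)      # match the new edge
                        self._add(w, x)      # re-match the displaced vertex
                        return
```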

    Online Computation with Untrusted Advice

    The advice model of online computation captures the setting in which the online algorithm is given some partial information concerning the request sequence. This paradigm makes it possible to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike real life, in which advice is a recommendation that we can choose to follow or to ignore based on its trustworthiness, in the current advice model the online algorithm treats it as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are robust and perform well even when the advice is generated in a malicious, adversarial manner. To this end, we focus on well-studied online problems such as ski rental, online bidding, bin packing, and list update. For ski rental and online bidding, we show how to obtain algorithms that are Pareto-optimal with respect to the competitive ratios achieved; this improves upon the framework of Purohit et al. [NeurIPS 2018], in which Pareto-optimality is not necessarily guaranteed. For bin packing and list update, we give online algorithms with worst-case tradeoffs in their competitiveness, depending on whether the advice is trusted or not; this is motivated by the work of Lykouris and Vassilvitskii [ICML 2018] on the paging problem, in which the competitiveness depends on the reliability of the advice. Furthermore, we demonstrate how to prove lower bounds, within this model, on the tradeoff between the number of advice bits and the competitiveness of any online algorithm. Last, we study the effect of randomization: here we show that for ski rental there is a randomized algorithm that Pareto-dominates any deterministic algorithm with advice of any size. We also show that a single random bit is not always inferior to a single advice bit, as is the case in the standard model.
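
    For ski rental, the lambda-parameterised scheme of Purohit et al. [NeurIPS 2018] that the abstract refers to can be sketched in a few lines. The version below is a simplified baseline with illustrative names (renting costs 1 per day, buying costs b, the advice predicts the number of skiing days, and a smaller lambda means more trust in the advice); it is not the Pareto-optimal algorithm developed in this work.

```python
import math

def ski_rental_with_advice(b, predicted_days, actual_days, lam):
    """Sketch of a lambda-parameterised ski-rental algorithm with advice.
    Returns the cost paid: renting costs 1 per day, buying costs b once."""
    if predicted_days >= b:
        buy_day = math.ceil(lam * b)   # advice says "buy": commit early
    else:
        buy_day = math.ceil(b / lam)   # advice says "rent": commit late
    if actual_days < buy_day:
        return actual_days             # rented every day, never bought
    return (buy_day - 1) + b           # rented until buy_day, then bought

# Smaller lam follows the advice more closely: correct advice gives cost
# close to the optimum, while adversarial advice can cost up to roughly
# (1 + 1/lam) times the optimum.
for lam in (0.25, 0.5, 1.0):
    good = ski_rental_with_advice(b=10, predicted_days=100, actual_days=100, lam=lam)
    bad = ski_rental_with_advice(b=10, predicted_days=1, actual_days=100, lam=lam)
    print(lam, good, bad)   # the optimal offline cost is 10 in both scenarios
```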

    Best-Of-Two-Worlds Analysis of Online Search

    In search problems, a mobile searcher seeks to locate a target that hides in some unknown position of the environment. Such problems are typically considered to be of an online nature, in that the input is unknown to the searcher, and the performance of a search strategy is usually analyzed by means of the standard framework of the competitive ratio, which compares the cost incurred by the searcher to that of an optimal strategy that knows the location of the target. However, one can argue that even for simple search problems, competitive analysis fails to distinguish between strategies which, intuitively, should have different performance in practice. Motivated by the above, in this work we introduce and study measures supplementary to competitive analysis in the context of search problems. In particular, we focus on the well-known problem of linear search, informally known as the cow-path problem, for which there are infinitely many strategies that achieve the optimal competitive ratio of 9. We propose a measure that reflects the rate at which the line is being explored by the searcher, and which can be seen as an extension of the bijective ratio over an uncountable set of requests. Using this measure we show that a natural strategy that explores the line aggressively is optimal among all 9-competitive strategies. This provides, in particular, a strict separation from the competitively optimal doubling strategy, which is much more conservative in terms of exploration. We also provide evidence that this aggressiveness is necessary for optimality, by showing that any optimal strategy must mimic the aggressive strategy in its first few explorations.
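
    For reference, the 9-competitive doubling strategy mentioned above is easy to simulate. The sketch below uses one common convention (turn points at distances 2^i, starting towards the right); for targets placed just beyond a turn point on the side last searched, the cost ratio approaches 9.

```python
def doubling_search_cost(target, max_rounds=60):
    """Cost of the classical doubling strategy on the line: in round i the
    searcher walks to distance 2**i on alternating sides of the origin and
    returns, until the target at signed position `target` (|target| >= 1)
    is reached."""
    cost = 0.0
    side = 1                             # +1: search right this round, -1: left
    for i in range(max_rounds):
        reach = 2 ** i
        if (target > 0) == (side > 0) and abs(target) <= reach:
            return cost + abs(target)    # found during this excursion
        cost += 2 * reach                # walk out and back
        side = -side
    raise ValueError("target not reached within max_rounds")

# Targets just beyond a turn point realise the worst case; the ratio of
# the strategy's cost to the distance of the target tends to 9.
for d in (4.001, 64.001, 1024.001):
    print(d, doubling_search_cost(d) / d)
```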

    Online Computation Beyond the Standard Models (Calcul en ligne au-delĂ  des modĂšles standards)

    In the standard setting of online computation, the input is not entirely available from the beginning, but is revealed incrementally, piece by piece, as a sequence of requests. Whenever a request arrives, the online algorithm has to make immediate and irrevocable decisions to serve it, without knowledge of future requests. The standard framework for evaluating the performance of online algorithms is competitive analysis, which compares the worst-case performance of an online algorithm to an offline optimal solution. In this thesis, we study some new ways of looking at online problems. First, we study online computation in the recourse model, in which the irrevocability of online decisions is relaxed; in other words, the online algorithm is allowed to go back and change previously made decisions. More precisely, we show how to identify the trade-off between the number of re-optimizations and the performance of online algorithms for the online maximum matching problem. Second, we study measures other than competitive analysis for evaluating the performance of online algorithms. We observe that competitive analysis sometimes cannot distinguish the performance of different algorithms, due to the worst-case nature of the competitive ratio, and we demonstrate that such a situation arises in the linear search problem. More precisely, we revisit the linear search problem and introduce a measure that can be applied as a refinement of the competitive ratio. Last, we study online computation in the advice model, in which the algorithm receives as input not only a sequence of requests, but also some advice on the request sequence. Specifically, we study a recent model with untrusted advice, in which the advice can be either trusted or untrusted; in the latter case, the advice may be generated by a malicious source. We show how to identify a Pareto-optimal strategy for the online bidding problem in the untrusted advice model.

    Earliest-Completion Scheduling of Contract Algorithms with End Guarantees

    We consider a setting in which executions of contract algorithms are scheduled on a processor so as to produce an interruptible system. Such algorithms offer a trade-off between the quality of their output and the available computation time, provided that the latter is known in advance. Previous work has provided strict performance guarantees for several variants of this setting, assuming that an interruption can occur arbitrarily far in the future. In practice, however, one expects the schedule to reach a point beyond which further progress is only marginal, at which point it can be deemed complete. In this work we show how to optimize the time at which the system reaches a desired performance objective, while maintaining interruptibility guarantees throughout the entire execution. The resulting schedule is provably optimal, and it guarantees that upon completion each individual contract algorithm has attained a predefined end guarantee.
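
    As background, the classical way to turn a single contract algorithm into an interruptible system is to run contracts of geometrically increasing lengths. The sketch below illustrates this textbook doubling schedule (it is not the optimized schedule of this paper): at any interruption time t, the longest completed contract has length at least roughly t/4.

```python
def doubling_schedule(horizon):
    """Contract lengths 1, 2, 4, ... run back to back within the horizon."""
    lengths, total, length = [], 0.0, 1.0
    while total + length <= horizon:
        lengths.append(length)
        total += length
        length *= 2
    return lengths

def longest_completed_at(lengths, t):
    """Length of the longest contract that has finished by time t."""
    best, elapsed = 0.0, 0.0
    for length in lengths:
        if elapsed + length > t:
            break
        best = max(best, length)
        elapsed += length
    return best

schedule = doubling_schedule(10_000)
for t in (10, 100, 1000):
    # Interruption time divided by the longest completed contract length:
    # this "acceleration ratio" stays below 4 for the doubling schedule.
    print(t, t / longest_completed_at(schedule, t))
```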